✨ feat: optimize log reporting performance with batch sending #251
base: develop
Conversation
…ce network requests

- Add LogBatcher class to implement log batching logic
- Support batching strategy based on entry count and payload size limits
- Add configuration options: batchLogs, batchLogsSize, batchPayloadLimit
- Implement flushLogs method to manually flush pending logs
- Update TypeScript type definitions and documentation
- Add comprehensive unit tests covering various batching scenarios
Walkthrough

Adds configurable log batching: constants and multipart-size helpers, a `LogBatcher` class with append/flush, integration into `ReportPortalClient` with batch send/flush (including flush on `finishLaunch`), TypeScript declarations, tests, and README docs for three new config options and a `flushLogs` API.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant RPClient
    participant LogBatcher
    participant REST_API
    Client->>RPClient: saveLog(logData)
    alt batchLogs enabled
        RPClient->>LogBatcher: append(logRequest)
        LogBatcher-->>LogBatcher: accumulate (count & payload)
        alt batch full (count or payload)
            LogBatcher->>RPClient: return batch
            RPClient->>REST_API: sendLogBatch(multipart stream)
            REST_API-->>RPClient: response
            RPClient-->>Client: resolve/push result(s)
        else accumulating
            LogBatcher-->>RPClient: pending (tempId + promise)
            RPClient-->>Client: return tempId & promise
        end
    else
        RPClient->>REST_API: send individual log
        REST_API-->>RPClient: response
        RPClient-->>Client: result
    end
```

```mermaid
sequenceDiagram
    participant Client
    participant RPClient
    participant LogBatcher
    participant REST_API
    Client->>RPClient: flushLogs()
    RPClient->>LogBatcher: flush()
    LogBatcher-->>RPClient: current batch (if any)
    alt batch exists
        RPClient->>RPClient: buildBatchMultiPartStream(batch)
        RPClient->>REST_API: sendLogBatch(multipart stream)
        REST_API-->>RPClient: response
    else
        Note over RPClient: nothing to flush
    end
    RPClient-->>Client: Promise resolved
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @lib/logs/batcher.js:
- Around lines 16-25: In `appendInternal`, when `size >= this.payloadLimit` and `this.batch.length > 0`, the current code returns the existing batch but drops `logReq`. Change it to return the existing batch while queuing the oversized log for the next flush: capture `const { batch } = this`, set `this.batch = [logReq]`, set `this.payloadSize = size`, and return `batch`. This preserves the returned batch and ensures the oversized `logReq` is not lost. Alternatively, you could return `[logReq]` immediately and keep `this.batch` intact, but be explicit about which behavior you pick.
In @lib/report-portal-client.js:
- Around lines 831-877: Replace the direct `UniqId()` call inside `sendLogViaBatcher` with `this.getUniqId()` to match the codebase convention, and make the promise resolution consistent: when `this.logBatcher.append(logRequest)` returns `null`, resolve the associated promise with the same shape that `sendLogBatch(batch).then` would resolve with (e.g., a standardized response or `null` with the same field names) so callers receive a consistent response via `this.map[tempId].promiseFinish`. Keep the references to `tempId`, `this.map[tempId]`, `sendLogBatch`, `logBatcher.append`, and `promiseFinish` when implementing the change.
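The first fix above (the oversized-log drop in `appendInternal`) can be sketched as follows. This is a minimal, self-contained stub for illustration only; the real class in `lib/logs/batcher.js` has more state and helpers, and the constructor signature here is an assumption based on the review comments.

```javascript
// Minimal sketch of the suggested appendInternal fix: an oversized entry is
// queued for the next flush instead of being dropped.
class LogBatcher {
  constructor(entryNum, payloadLimit) {
    this.entryNum = entryNum; // max entries per batch
    this.payloadLimit = payloadLimit; // max accumulated payload bytes
    this.batch = [];
    this.payloadSize = 0;
  }

  appendInternal(logReq, size) {
    // If adding this entry would exceed the payload limit (which also covers
    // the single-oversized-entry case, since payloadSize >= 0), hand back the
    // current batch and queue the new entry for the next flush.
    if (this.payloadSize + size >= this.payloadLimit && this.batch.length > 0) {
      const { batch } = this;
      this.batch = [logReq];
      this.payloadSize = size;
      return batch;
    }
    this.batch.push(logReq);
    this.payloadSize += size;
    // Entry-count limit reached: return the full batch and reset state.
    if (this.batch.length >= this.entryNum) {
      const { batch } = this;
      this.batch = [];
      this.payloadSize = 0;
      return batch;
    }
    return null; // still accumulating
  }
}
```

Note how an oversized second entry causes the first batch to be returned while the oversized entry itself survives as the start of the next batch.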
🧹 Nitpick comments (5)
`lib/commons/config.js` (1)

120-122: **Consider validating numeric batch options.**

The nullish coalescing handles missing values correctly, but there's no validation for invalid values (e.g., negative numbers, zero, or non-numeric types). A user passing `batchLogsSize: -1` or `batchPayloadLimit: 'abc'` could cause unexpected behavior in `LogBatcher`.

💡 Suggested validation

```diff
-  batchLogs: options.batchLogs ?? false,
-  batchLogsSize: options.batchLogsSize ?? MAX_LOG_BATCH_SIZE,
-  batchPayloadLimit: options.batchPayloadLimit ?? MAX_LOG_BATCH_PAYLOAD_SIZE,
+  batchLogs: options.batchLogs ?? false,
+  batchLogsSize: Number.isInteger(options.batchLogsSize) && options.batchLogsSize > 0
+    ? options.batchLogsSize
+    : MAX_LOG_BATCH_SIZE,
+  batchPayloadLimit: Number.isInteger(options.batchPayloadLimit) && options.batchPayloadLimit > 0
+    ? options.batchPayloadLimit
+    : MAX_LOG_BATCH_PAYLOAD_SIZE,
```

`lib/logs/helpers.js` (1)
25-31: **Potential size miscalculation for array-like content.**

The fallback at lines 29-30 assumes `file.content.length` represents byte size; this holds for array-like binary objects (e.g., `Uint8Array` and other typed arrays), but not for a plain array of objects. Consider being more explicit about supported types.

💡 Suggested refinement

```diff
 if (Buffer.isBuffer(file.content)) {
   fileSize = file.content.length;
 } else if (typeof file.content === 'string') {
   fileSize = Buffer.byteLength(file.content, 'utf8');
-} else if (file.content.length !== undefined) {
+} else if (ArrayBuffer.isView(file.content)) {
+  fileSize = file.content.byteLength;
+} else if (typeof file.content.length === 'number') {
   fileSize = file.content.length;
 }
```

`lib/logs/batcher.js` (2)
27-34: **Edge case: empty batch when payload limit exceeded.**

If `this.payloadSize + size >= this.payloadLimit` but `this.batch.length === 0` (e.g., after the oversized-log fix above), the code falls through without returning, which is correct. However, the nested conditional at line 28 makes the structure slightly confusing.

💡 Clearer structure

```diff
 if (this.payloadSize + size >= this.payloadLimit) {
-  if (this.batch.length > 0) {
-    const { batch } = this;
-    this.batch = [logReq];
-    this.payloadSize = size;
-    return batch;
-  }
+  const { batch } = this;
+  this.batch = [logReq];
+  this.payloadSize = size;
+  if (batch.length > 0) {
+    return batch;
+  }
+  return null;
 }
-
-this.batch.push(logReq);
-this.payloadSize += size;
```

This makes the empty-batch case explicit rather than relying on fall-through.
69-75: **Getter name `batchSize` may be misleading.**

The getter returns `this.batch.length` (the current count), but based on the JSDoc context and typical naming conventions, `batchSize` could be confused with the maximum batch size (`entryNum`). Consider renaming to `currentBatchLength` or `pendingCount` for clarity.

`__tests__/log-batcher.spec.js` (1)
259-345: **Consider adding a test for an undefined file property.**

The edge-case tests are comprehensive, but there's no explicit test for when the `file` property is `undefined` (vs. explicitly `null`). This could be relevant since the implementation checks `logReq.file` in `sendLogViaBatcher`.

📝 Suggested additional test case

```js
it('should handle undefined file property', () => {
  const batcher = new LogBatcher(3, 10000);
  const logReq = {
    payload: { message: 'Test log without file property', level: 'info' },
    // file property is undefined
  };

  const result = batcher.append(logReq);

  expect(result).toBeNull();
  expect(batcher.batchSize).toBe(1);
});
```
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (9)
- `README.md`
- `__tests__/log-batcher.spec.js`
- `index.d.ts`
- `lib/commons/config.js`
- `lib/logs/batcher.js`
- `lib/logs/constants.js`
- `lib/logs/helpers.js`
- `lib/logs/index.js`
- `lib/report-portal-client.js`
🧰 Additional context used
🧬 Code graph analysis (3)
`lib/commons/config.js` (3)
- `__tests__/log-batcher.spec.js` (1)
  - `require` (2-2)
- `lib/logs/index.js` (2)
  - `require` (2-6)
  - `require` (7-7)
- `lib/logs/constants.js` (2)
  - `MAX_LOG_BATCH_SIZE` (1-1)
  - `MAX_LOG_BATCH_PAYLOAD_SIZE` (2-2)

`__tests__/log-batcher.spec.js` (3)
- `lib/logs/index.js` (3)
  - `LogBatcher` (1-1)
  - `require` (2-6)
  - `require` (7-7)
- `lib/logs/batcher.js` (2)
  - `require` (1-1)
  - `require` (2-2)
- `lib/logs/constants.js` (2)
  - `MAX_LOG_BATCH_SIZE` (1-1)
  - `MAX_LOG_BATCH_PAYLOAD_SIZE` (2-2)

`lib/logs/batcher.js` (2)
- `lib/logs/index.js` (2)
  - `require` (2-6)
  - `require` (7-7)
- `lib/logs/constants.js` (2)
  - `MAX_LOG_BATCH_SIZE` (1-1)
  - `MAX_LOG_BATCH_PAYLOAD_SIZE` (2-2)
🔇 Additional comments (17)
`lib/logs/constants.js` (1)

1-7: **LGTM!**

Constants are well-defined with sensible defaults. The 20-entry batch size and 64 MB payload limit provide reasonable thresholds for batching behavior.
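Based on this comment and the code-graph section above, the constants file presumably looks like the following sketch. The names come from the review; the exact values are inferred from the "20-entry batch size and 64 MB payload limit" description, so treat them as assumptions.

```javascript
// Presumed shape of lib/logs/constants.js (values inferred from this review).
const MAX_LOG_BATCH_SIZE = 20; // flush after 20 accumulated log entries
const MAX_LOG_BATCH_PAYLOAD_SIZE = 64 * 1024 * 1024; // flush at 64 MB payload

module.exports = { MAX_LOG_BATCH_SIZE, MAX_LOG_BATCH_PAYLOAD_SIZE };
```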
`lib/logs/helpers.js` (1)

38-45: **LGTM!**

Clean composition of size calculations with appropriate boundary overhead.
`lib/logs/batcher.js` (2)

53-56: **LGTM!**

Clean delegation to `appendInternal` with size calculation.

58-67: **LGTM!**

Flush implementation correctly returns and resets the batch state.
`lib/logs/index.js` (1)

1-16: **LGTM!**

Clean barrel module that correctly aggregates and re-exports all logging utilities for public API consumption.
`README.md` (2)

126-128: **LGTM! Clear documentation for new batching configuration options.**

The documentation correctly describes the new configuration options with their default values and purpose. The default values align with the constants defined in the codebase.

598-636: **LGTM! Comprehensive documentation for the flushLogs API.**

The documentation clearly explains when to use `flushLogs()`, provides a practical code example, and lists the automatic flush triggers. This aligns well with the implementation.

`index.d.ts` (2)
163-177: **LGTM! Type definitions correctly added for batching configuration.**

The optional properties with appropriate types and JSDoc defaults align with the implementation.

407-442: **LGTM! Well-documented flushLogs method declaration.**

The JSDoc provides comprehensive documentation with a practical usage example.
`__tests__/log-batcher.spec.js` (4)

1-21: **LGTM! Good constructor test coverage.**

Tests verify both default values from constants and custom parameter initialization.

23-103: **LGTM! Comprehensive append behavior tests.**

Good coverage of the core append scenarios: non-full batch, entry limit reached, payload limit exceeded, and file attachments.

105-181: **LGTM! Thorough oversized log handling tests.**

The tests correctly verify behavior for oversized logs when the batch is empty vs. non-empty, and for consecutive oversized logs.

183-232: **LGTM! Flush tests verify idempotency correctly.**

Good coverage of empty-batch, non-empty-batch, and second-flush-call scenarios.
`lib/report-portal-client.js` (4)

69-72: **LGTM! Conditional batcher initialization.**

The `LogBatcher` is only created when `batchLogs` is enabled, and correctly passes the configuration values.

284-326: **LGTM! Good refactoring for launch finish flow with log flushing.**

The flow correctly flushes logs before finishing the launch, and proceeds with finishing even if the flush fails (logging the error). This ensures the launch finishes regardless of flush errors.

1021-1034: **LGTM! Clean implementation of flushLogs.**

The method correctly returns early if no batcher exists, and delegates to `sendLogBatch`, which handles null/empty batches gracefully.

762-764: **LGTM! Clean routing through batcher when enabled.**

Both `sendLogWithoutFile` and `sendLogWithFile` correctly check for the batcher and route appropriately.

Also applies to: 807-809
- Fix oversized log being dropped when batch is non-empty
- Use `this.getUniqId()` instead of `UniqId()` for consistency
- Return consistent response shape from `sendLogViaBatcher`
- Add JSDoc comments to batcher and helpers
Actionable comments posted: 0
🧹 Nitpick comments (2)
`lib/report-portal-client.js` (2)

12-12: **Pre-existing: module-level boundary reuse in concurrent scenarios.**

The `MULTIPART_BOUNDARY` constant is generated once at module load and reused for all non-batched log requests with files. While this is pre-existing code, consider whether concurrent file log requests could cause issues. The new batch code correctly generates unique boundaries per request (line 1000), which is the safer pattern.

♻️ Optional: Generate boundary per request for consistency

```diff
-const MULTIPART_BOUNDARY = Math.floor(Math.random() * 10000000000).toString();
+const generateBoundary = () => Math.floor(Math.random() * 10000000000).toString();
```

Then update `getRequestLogWithFile` to generate a fresh boundary per call.
831-877: **Document the fire-and-forget semantics of queued logs in batched logging.**

Queued log responses resolve with `{ queued: true }` and don't receive the actual server response from the batch. This is intentional for batching performance, but callers should understand they cannot await the actual server response. The `flushLogs()` method provides a way to force batch sending if needed.

Update the JSDoc for `sendLogViaBatcher` to clarify that promises for queued logs resolve with `{ queued: true }` and won't contain the server response, and document when/how to use `flushLogs()` if actual responses are required.
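The fire-and-forget contract described above can be illustrated with a hypothetical mock. The `saveLog`/`flushLogs` names mirror the client API, but the bodies are simplified stand-ins, not the real implementation:

```javascript
// Hypothetical mock illustrating the described semantics: queued logs resolve
// with { queued: true }, and only flushLogs() actually sends the batch.
class MockBatchingClient {
  constructor() {
    this.pending = [];
  }

  saveLog(log) {
    // Batched path: resolve immediately with a queue marker, not with the
    // server's batch response.
    this.pending.push(log);
    return Promise.resolve({ queued: true });
  }

  flushLogs() {
    // Force-send whatever has accumulated so far.
    const batch = this.pending;
    this.pending = [];
    return Promise.resolve({ sent: batch.length });
  }
}
```

Callers that need confirmation the logs reached the server must therefore await `flushLogs()` (or rely on the automatic flush in `finishLaunch`), not the per-log promise.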
📒 Files selected for processing (4)
- `__tests__/log-batcher.spec.js`
- `lib/logs/batcher.js`
- `lib/logs/helpers.js`
- `lib/report-portal-client.js`
🚧 Files skipped from review as they are similar to previous changes (3)
- `lib/logs/helpers.js`
- `__tests__/log-batcher.spec.js`
- `lib/logs/batcher.js`
🔇 Additional comments (8)
`lib/report-portal-client.js` (8)

10-10: **LGTM!**

The import correctly references the new logs module aggregation.

69-72: **LGTM!**

Conditional initialization of the log batcher is clean and correctly passes the configuration parameters.

284-320: **LGTM!**

The refactored finish flow correctly ensures logs are flushed before completing the launch. The error handling appropriately logs flush failures without blocking the launch completion, which is the right tradeoff for reliability.

762-764: **LGTM!**

Clean conditional routing to the batcher path when batching is enabled.

807-809: **LGTM!**

Consistent routing pattern with the file object properly passed for batch handling.

994-1019: **LGTM!**

The batch sending logic is well-implemented:
- Proper guard for empty batches
- Unique boundary per request prevents collision issues
- Errors are properly re-thrown for caller handling

1027-1034: **LGTM!**

Clean implementation that safely handles both batching-disabled and empty-batch scenarios.

968-980: **No action needed. The batch file attachment handling is correctly implemented.**

Multiple file parts can safely use the same form field name `name="file"` because they are uniquely identified by their `filename` attributes in the Content-Disposition headers. The correlation with log entries is established through order preservation: `logRequests` are iterated in the same sequence when building the payloads array and when adding file parts, and each payload includes file metadata (`payload.file = { name: fileObj.name }`). This is standard multipart/form-data handling that ReportPortal's server will process correctly.

Likely an incorrect or invalid review comment.
@HardNorth I'm sorry to bother you again. I've reimplemented it based on your feedback. Could you please take another look?
HardNorth left a comment:
Please fix tests.
Add comprehensive test cases for batch logging to meet coverage thresholds (branches: 80%, functions: 75%).
done.
I'm sorry to interrupt again. Can someone kindly help me look at it again?
… logs

- Validate `batchLogsSize` and `batchPayloadLimit` to ensure positive numbers
- Add warning when a single log exceeds the payload limit
- Update tests to match async `sendLogViaBatcher` behavior
@maria-hambardzumian Thank you for your time in pointing out the problem; I've fixed it.
@maria-hambardzumian I'm sorry to bother you, could you please take another look?
@AmsterGet can you take a look?
Summary
Introduces a batching mechanism for log reporting to reduce network requests and improve performance.
Changes
- `batchLogs`, `batchLogsSize`, `batchPayloadLimit` configuration options
- `flushLogs()` method to manually flush pending logs

Benefits
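Under the defaults described in this PR (`batchLogs=false`, `batchLogsSize=20`, `batchPayloadLimit=64 MB`), a configuration enabling batching might look like this sketch. The option names come from the PR; the endpoint/project values are placeholders, and `RPClient` stands for the library's exported client class:

```javascript
// Hypothetical config sketch for the new batching options.
const config = {
  apiKey: '<api_key>',
  endpoint: 'https://your.reportportal.server/api/v1',
  project: 'your_project',
  launch: 'launch_name',
  batchLogs: true,                     // opt in to batched log sending
  batchLogsSize: 20,                   // flush after 20 entries (default)
  batchPayloadLimit: 64 * 1024 * 1024, // ...or when payload reaches 64 MB
};

// const client = new RPClient(config);
// ...saveLog(...) calls accumulate in the batcher...
// client.flushLogs().then(() => { /* all pending logs sent */ });
```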
Summary by CodeRabbit
New Features
Documentation
Tests